feat: add pipeline parallel #88
Conversation
```cpp
void SGD::Step() {
  for (auto param : params_) {
    if (!param->grad()) {
```
Can this situation actually occur? And if a null check is needed here, shouldn't Adam::Step below get the same check?
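For reference, a minimal sketch of applying the guard consistently in both optimizers, assuming the Tensor/optimizer shapes implied by the snippet above (everything except `SGD::Step` and `params_` is illustrative):

```cpp
#include <memory>
#include <vector>

// Stub stand-in for the real Tensor; grad() may be null if the parameter
// never received a gradient.
struct Tensor {
  std::shared_ptr<Tensor> grad() const { return grad_; }
  std::shared_ptr<Tensor> grad_;
};

struct SGD {
  std::vector<std::shared_ptr<Tensor>> params_;
  void Step() {
    for (auto& param : params_) {
      if (!param->grad()) continue;  // guard against missing gradients
      // ... apply the SGD update to param using param->grad() ...
    }
  }
};

struct Adam {
  std::vector<std::shared_ptr<Tensor>> params_;
  void Step() {
    for (auto& param : params_) {
      if (!param->grad()) continue;  // same guard, as the comment suggests
      // ... Adam moment updates and bias-corrected step ...
    }
  }
};
```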
```cpp
std::vector<std::shared_ptr<Tensor>> target_mbs(num_micro_batches_);
if (stage_->IsFirstStage()) {
  {
    autograd::NoGradGuard no_grad;
```
Please check whether the no_grad is necessary here: the model-level input/target tensors have requires_grad == false, so the operations on them will not build an autograd graph anyway.
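For context on the comment, a sketch of the convention being referenced. `Split` and the `Tensor` constructor are hypothetical; the rule assumed is the PyTorch-style one where ops record autograd nodes only when some input has requires_grad == true:

```cpp
// Hypothetical sketch; Split() stands in for whatever op builds the
// micro-batches. input/target have requires_grad == false, so under the
// assumed rule no autograd nodes are recorded even without a guard.
auto input = std::make_shared<Tensor>(/*requires_grad=*/false);
auto target = std::make_shared<Tensor>(/*requires_grad=*/false);

auto input_mbs = input->Split(num_micro_batches_);  // no graph built

// The guard below would therefore be redundant here -- which is exactly
// what the review comment asks the author to verify:
{
  autograd::NoGradGuard no_grad;  // disables graph recording in this scope
  auto target_mbs = target->Split(num_micro_batches_);
}
```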
```cpp
if (stage_->IsLastStage()) {
  {
    autograd::NoGradGuard no_grad;
```
Same as above: check whether the NoGradGuard is needed here.
```cpp
std::vector<std::shared_ptr<Tensor>> outputs;
for (auto t : input_tensors) { outputs.push_back(t); }
return outputs;
```
Couldn't this just return input_tensors directly? There is no need to construct a separate outputs vector.
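A minimal sketch of the simplification being suggested (the function name and signature are hypothetical):

```cpp
#include <memory>
#include <vector>

struct Tensor {};

// Hypothetical signature; returning the by-value parameter lets the
// compiler move it on return instead of copying element by element.
std::vector<std::shared_ptr<Tensor>> PassThrough(
    std::vector<std::shared_ptr<Tensor>> input_tensors) {
  return input_tensors;  // implicit move, no per-element push_back
}
```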









1. Main changes:
This PR adds a Pipeline Parallel (PP) feature: the original model is split evenly along its outermost structure across multiple ranks, with each rank holding a subset of the layers and the corresponding parameters. During training, the forward pass runs from rank 0 to rank n in computation-graph order and the backward pass runs from rank n back to rank 0, with the ranks communicating point-to-point.
net.cc: based on PP_size and pp_rank, builds the sub-block owned by the current rank, and its parameters, at model-construction time (a partitioning sketch follows).
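To illustrate the even outermost-layer split described above (this is a sketch, not the actual net.cc code; `Layer` and `PartitionForRank` are hypothetical names):

```cpp
#include <algorithm>
#include <cstdint>
#include <memory>
#include <vector>

// Hypothetical stand-in for the model's outermost blocks.
struct Layer {};

// Give pp_rank its contiguous, as-even-as-possible share of the layers.
std::vector<std::shared_ptr<Layer>> PartitionForRank(
    const std::vector<std::shared_ptr<Layer>>& layers,
    uint32_t pp_size, uint32_t pp_rank) {
  const size_t n = layers.size();
  const size_t base = n / pp_size;    // every stage gets at least this many
  const size_t extra = n % pp_size;   // first `extra` stages get one more
  const size_t begin = pp_rank * base + std::min<size_t>(pp_rank, extra);
  const size_t count = base + (pp_rank < extra ? 1 : 0);
  return {layers.begin() + begin, layers.begin() + begin + count};
}
```

For example, with 8 outermost blocks and --pipeline_parallel 4, each rank keeps 2 consecutive blocks.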

pipeline_parallel.cc: PipelineParallel wraps the model; each rank holds one PipelineParallel instance, which wires together the associated PipelineSchedule, PipelineStage, and Optimizers. It exposes TrainStep as the training entry point, which calls the scheduler's training method Step (a rough shape is sketched below).
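The wrapper plausibly looks something like the following; this is a sketch inferred from the description above, with stub types standing in for the real classes:

```cpp
#include <memory>
#include <utility>
#include <vector>

// Stub stand-ins for the real classes described in this PR.
struct Tensor {};
struct PipelineStage {};
struct Optimizer {
  virtual ~Optimizer() = default;
  virtual void Step() = 0;
};
struct PipelineSchedule {
  virtual ~PipelineSchedule() = default;
  virtual void Step(std::shared_ptr<Tensor> input,
                    std::shared_ptr<Tensor> target) = 0;
};

class PipelineParallel {
 public:
  // Training entry point: delegate one full iteration to the scheduler.
  // Whether the optimizer step happens here or inside Step is an assumption.
  void TrainStep(std::shared_ptr<Tensor> input,
                 std::shared_ptr<Tensor> target) {
    schedule_->Step(std::move(input), std::move(target));
    for (auto& opt : optimizers_) opt->Step();
  }

 private:
  std::shared_ptr<PipelineStage> stage_;  // this rank's model chunk
  std::shared_ptr<PipelineSchedule> schedule_;
  std::vector<std::shared_ptr<Optimizer>> optimizers_;
};
```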
pipeline_schedule.cc: PipelineSchedule is the scheduler base class; its Step method runs one full training iteration. ScheduleGPipe is the GPipe scheduler subclass (illustrated by the schedule diagram in the PR screenshots), with StepMicrobatches as the concrete scheduling implementation; see the sketch below.
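GPipe's schedule is simple: forward all micro-batches, then backward all micro-batches. A sketch of what StepMicrobatches plausibly does (only ForwardOneChunk, IsLastStage, and the class names come from this PR; loss_fn_ and Backward() are assumed APIs):

```cpp
// Sketch of a GPipe-style schedule for one rank's stage.
void ScheduleGPipe::StepMicrobatches(
    const std::vector<std::shared_ptr<Tensor>>& input_mbs,
    const std::vector<std::shared_ptr<Tensor>>& target_mbs) {
  std::vector<std::shared_ptr<Tensor>> losses(num_micro_batches_);

  // Phase 1: forward every micro-batch through this rank's subgraph.
  // On non-first stages the input arrives from the previous rank via IRecv.
  for (size_t mb = 0; mb < num_micro_batches_; ++mb) {
    auto output = stage_->ForwardOneChunk(input_mbs[mb]);
    if (stage_->IsLastStage()) {
      losses[mb] = loss_fn_(output, target_mbs[mb]);
    }
  }

  // Phase 2: backward every micro-batch. Only the last stage calls
  // Backward() explicitly; earlier stages block inside the boundary
  // autograd nodes until the gradient arrives from the next rank.
  for (size_t mb = 0; mb < num_micro_batches_; ++mb) {
    if (stage_->IsLastStage()) {
      losses[mb]->Backward();
    }
  }
}
```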
pipeline_stage.cc: PipelineStage represents the subgraph held by the current rank and provides the ForwardOneChunk method to run the forward computation of that subgraph.
send_recv.cc: ISend and IRecv are two autograd nodes used for directed tensor transfer between ranks, built on the autograd mechanism. The last step of rank x's backward pass sends the gradient to rank x-1; that triggers ISend::Backward on rank x-1, which receives the gradient and starts rank x-1's backward pass (sketched below).
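Conceptually the stage boundary behaves like this; a sketch assuming a Forward/Backward autograd-node shape, with P2PSend/P2PRecv as placeholders for the real point-to-point primitives:

```cpp
#include <memory>

struct Tensor {};

// Placeholders for the real point-to-point communication primitives.
void P2PSend(const std::shared_ptr<Tensor>&, int /*peer*/) {}
std::shared_ptr<Tensor> P2PRecv(int /*peer*/) {
  return std::make_shared<Tensor>();
}

// In the real code ISend/IRecv are autograd nodes; these structs only
// sketch the direction of data flow at a stage boundary.
struct ISend {  // lives on rank x-1, boundary toward rank x
  int peer;
  std::shared_ptr<Tensor> Forward(std::shared_ptr<Tensor> x) {
    P2PSend(x, peer);  // ship the activation downstream
    return x;
  }
  std::shared_ptr<Tensor> Backward() {
    // Wait for rank x to push the gradient back, then let autograd
    // continue rank x-1's backward pass from here.
    return P2PRecv(peer);
  }
};

struct IRecv {  // lives on rank x, boundary toward rank x-1
  int peer;
  std::shared_ptr<Tensor> Forward() {
    return P2PRecv(peer);  // activation from the previous stage
  }
  std::shared_ptr<Tensor> Backward(std::shared_ptr<Tensor> grad) {
    P2PSend(grad, peer);   // last backward step: gradient to rank x-1
    return grad;
  }
};
```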
2. Command-line argument:
--pipeline_parallel  # uint32; enables pipeline parallelism; the value is the number of parallel devices, i.e. the number of stages
Example:
./llama3 --input_bin <input_path> --llmc_filepath <model_path> --device cuda --nthread_per_process 8 --batch_size 10 --total_batch_size 5120 --num_iteration 10 --pipeline_parallel 8